The Generalisation Cost of RAMnets

Neural Information Processing Systems

Given unlimited computational resources, it is best to use a criterion of minimal expected generalisation error to select a model and determine its parameters. However, it may be worthwhile to sacrifice some generalisation performance for higher learning speed. A method for quantifying sub-optimality is set out here, so that this choice can be made intelligently. Furthermore, the method is applicable to a broad class of models, including the ultra-fast memory-based methods such as RAMnets. This brings the added benefit of providing, for the first time, the means to analyse the generalisation properties of such models in a Bayesian framework.


Computational Invention of Cadences and Chord Progressions by Conceptual Chord-Blending

Eppe, Manfred (IIIA-CSIC, ICSI) | Confalonieri, Roberto (IIIA-CSIC) | MacLean, Ewen (University of Edinburgh) | Kaliakatsos, Maximos (University of Thessaloniki) | Cambouropoulos, Emilios (University of Thessaloniki) | Schorlemmer, Marco (IIIA-CSIC) | Codescu, Mihai (University of Magdeburg) | Kühnberger, Kai-Uwe (University of Osnabrück)

AAAI Conferences

We present a computational framework for chord invention based on a cognitive-theoretic perspective on conceptual blending. The framework builds on algebraic specifications, and solves two musicological problems. It automatically finds transitions between chord progressions of different keys or idioms, and it substitutes chords in a chord progression by other chords of a similar function, as a means to create novel variations. The approach is demonstrated with several examples where jazz cadences are invented by blending chords in cadences from earlier idioms, and where novel chord progressions are generated by inventing transition chords.


The Generalisation Cost of RAMnets

Rohwer, Richard, Morciniec, Michal

Neural Information Processing Systems

We follow a similar approach to (Zhu & Rohwer, to appear 1996) in using a Gaussian process to define a prior over the space of functions, so that the expected generalisation cost under the posterior can be determined. The optimal model is defined in terms of the restriction of this posterior to the subspace defined by the model. The optimum is easily determined for linear models over a set of basis functions. We go on to compute the generalisation cost (with an error bar) for all models of this class, which we demonstrate to include the RAMnets.
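The procedure the abstract describes — a Gaussian-process posterior over functions, restricted to a linear model over basis functions, with the expected generalisation cost of the restricted model compared against the optimum — can be sketched as follows. This is a minimal NumPy illustration, not the authors' formalism; the RBF kernel, the Gaussian-bump basis, and the toy data are all assumptions made for the example.

```python
import numpy as np

def rbf_kernel(X1, X2, length=0.5):
    # Squared-exponential covariance for the GP prior over functions.
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 20)                       # toy training inputs
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(20)

# GP posterior mean and covariance on a dense grid of test inputs.
Xs = np.linspace(0, 1, 200)
noise = 0.1 ** 2
K = rbf_kernel(X, X) + noise * np.eye(len(X))
Ks = rbf_kernel(Xs, X)
Kss = rbf_kernel(Xs, Xs)
alpha = np.linalg.solve(K, y)
post_mean = Ks @ alpha
post_cov = Kss - Ks @ np.linalg.solve(K, Ks.T)

# Restrict to a linear model over a fixed set of basis functions
# (here: a few hypothetical Gaussian bumps).  For squared-error loss the
# optimal weights are the least-squares projection of the posterior mean
# onto the span of the basis.
centres = np.linspace(0, 1, 8)
Phi = rbf_kernel(Xs, centres, length=0.15)
w, *_ = np.linalg.lstsq(Phi, post_mean, rcond=None)
model = Phi @ w

# Expected generalisation cost, averaged over test inputs: the irreducible
# posterior variance, plus the sub-optimality of the restricted model.
cost_optimal = np.mean(np.diag(post_cov))
cost_model = cost_optimal + np.mean((model - post_mean) ** 2)
print(cost_optimal, cost_model)
```

The gap `cost_model - cost_optimal` is the "generalisation cost" paid for using the restricted model class rather than the full posterior mean.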


The Generalisation Cost of RAMnets

Rohwer, Richard, Morciniec, Michal

Neural Information Processing Systems

Neural Computing Research Group, Aston University, Aston Triangle, Birmingham B4 7ET, UK.

Abstract

Given unlimited computational resources, it is best to use a criterion of minimal expected generalisation error to select a model and determine its parameters. However, it may be worthwhile to sacrifice some generalisation performance for higher learning speed. A method for quantifying sub-optimality is set out here, so that this choice can be made intelligently. Furthermore, the method is applicable to a broad class of models, including the ultra-fast memory-based methods such as RAMnets. This brings the added benefit of providing, for the first time, the means to analyse the generalisation properties of such models in a Bayesian framework.

1 Introduction

In order to quantitatively predict the performance of methods such as the ultra-fast RAMnet, which are not trained by minimising a cost function, we develop a Bayesian formalism for estimating the generalisation cost of a wide class of algorithms.
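The decomposition underlying such an estimate, standard for squared-error loss under a posterior (the notation here is illustrative, not necessarily the paper's), separates the cost of any predictor \(\hat{f}\) into an irreducible term and a model sub-optimality term:

```latex
\mathbb{E}\big[(f(x) - \hat{f}(x))^2 \mid \mathcal{D}\big]
  = \underbrace{\operatorname{Var}\big[f(x) \mid \mathcal{D}\big]}_{\text{irreducible}}
  + \underbrace{\big(\hat{f}(x) - \mathbb{E}[f(x) \mid \mathcal{D}]\big)^2}_{\text{sub-optimality}}
```

The first term is attained by the posterior mean; the second quantifies what a restricted model class such as a RAMnet gives up relative to that optimum.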